Multiple alerts getting mixed into one incident with the Prometheus integration

We are running an individual Prometheus stack on each of multiple EKS clusters, so each cluster has its own Alertmanager running, and each Alertmanager sends its alerts to a single PagerDuty service using the Prometheus integration (incident_key) in the Alertmanager config.

Which looks like this:

```yaml
- name: pagerduty-demo
  pagerduty_configs:
    - service_key: 'xxx40bdb9xxxx09c07792xxxx'
```
I have tried some workarounds with the group_by parameters, but they have not solved the issue. The problem is that alerts from multiple clusters are getting grouped into a single incident: one incident contains the same alert (e.g. xyz) for several clusters, which makes it hard to keep track of critical alerts. Can someone help me with how this can be resolved?
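
For reference, the kind of route I have been experimenting with looks roughly like this (a simplified sketch, not my exact config; timings and labels vary):

```yaml
route:
  receiver: pagerduty-demo
  # Grouping only by alert name, so the same alert firing on every cluster
  # lands in the same group (and therefore the same incident).
  group_by: ['alertname']
  group_wait: 30s
  group_interval: 5m
  repeat_interval: 4h
```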

Hi Gaurav,

It sounds like you may be running into this issue because the dedup key is the same for all of the alerts. You can set or change the dedup key with event rules; we have more information about that in our Knowledge Base here.

If you’d like the support team to look into the particular alerts/incidents, please email us at support@pagerduty.com.

Thanks,

As Abbott mentioned, check the deduplication key you’re creating (you will likely need to customize your Alertmanager configuration), or consider an Event Orchestration rule that extracts event data and creates the desired deduplication keys and resulting incidents. Content Based Alert Grouping (CBAG) is another option if you’re using our AIOps solution.
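
For example, here is a minimal sketch of the Alertmanager side of that, assuming each cluster stamps its alerts with a cluster label (the label name and value are just examples; use whatever identifies the cluster in your setup):

```yaml
# Prometheus config on each cluster: add an external label that identifies the cluster.
global:
  external_labels:
    cluster: eks-cluster-a   # example value, set per cluster

# Alertmanager config: include that label in group_by so the group key
# (which the PagerDuty integration uses to derive the dedup key) differs
# per cluster, producing one incident per cluster instead of one shared one.
route:
  receiver: pagerduty-demo
  group_by: ['alertname', 'cluster']
```

With that in place, the same alert firing on two different clusters should open two separate incidents rather than being merged into one.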